A view of margin losses as regularizers of probability estimates

Authors

  • Hamed Masnadi-Shirazi
  • Nuno Vasconcelos
Abstract

Regularization is commonly used in classifier design to assure good generalization. Classical regularization enforces a cost on classifier complexity by constraining parameters. This is usually combined with a margin loss, which favors large-margin decision rules. A novel and unified view of this architecture is proposed, showing that margin losses act as regularizers of posterior class probabilities in a way that amplifies classical parameter regularization. The problem of controlling the regularization strength of a margin loss is considered, using a decomposition of the loss in terms of a link function and a binding function. The link function is shown to be responsible for the regularization strength of the loss, while the binding function determines its outlier robustness. A large class of losses is then categorized into equivalence classes of identical regularization strength or outlier robustness. It is shown that losses in the same regularization class can be parameterized so as to have tunable regularization strength. This parameterization is finally used to derive boosting algorithms with loss regularization (BoostLR). Three classes of tunable regularization losses are considered in detail. Canonical losses can implement all regularization behaviors but offer no flexibility in outlier modeling. Shrinkage losses support equally parameterized link and binding functions, leading to boosting algorithms that implement the popular shrinkage procedure; this offers a new explanation of shrinkage as a special case of loss-based regularization. Finally, α-tunable losses enable independent parameterization of the link and binding functions, leading to boosting algorithms of great flexibility. This is illustrated by the derivation of an algorithm that generalizes both AdaBoost and LogitBoost, behaving as whichever of the two best suits the data to be classified. Various experiments provide evidence of the benefits of probability regularization for both classification and the estimation of posterior class probabilities.
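Since the abstract centers on margin losses as producers of posterior probability estimates, a minimal sketch may help fix ideas. The Python snippet below is not from the paper, and the function names are illustrative. It uses two standard inverse-link formulas (Friedman, Hastie and Tibshirani, 2000): under the exponential loss of AdaBoost the posterior is recovered as P(y=1|x) = 1/(1 + exp(-2f(x))), and under the logistic loss of LogitBoost as 1/(1 + exp(-f(x))). Damping the margin, as shrinkage does by scaling each weak learner by a factor nu < 1, pulls the estimates toward 1/2, which is one concrete reading of the abstract's claim that shrinkage is a special case of loss-based probability regularization.

    import numpy as np

    # Inverse link of the exponential (AdaBoost) loss: recovers P(y=1|x)
    # from the margin f(x) of a boosted predictor.
    def posterior_exponential(f):
        return 1.0 / (1.0 + np.exp(-2.0 * f))

    # Inverse link of the logistic (LogitBoost) loss.
    def posterior_logistic(f):
        return 1.0 / (1.0 + np.exp(-f))

    # Shrinkage scales each weak learner by nu < 1, so the effective
    # margin is nu * f; probability estimates are pulled toward 1/2.
    def posterior_shrunk(f, nu=0.1):
        return posterior_exponential(nu * f)

    margins = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
    print(posterior_exponential(margins))  # saturates quickly toward 0 and 1
    print(posterior_logistic(margins))     # saturates more gently
    print(posterior_shrunk(margins))       # strongly regularized, near 1/2

Note that the exponential link saturates twice as fast as the logistic one, which is consistent with the abstract's point that the link function controls the regularization strength of the loss: a flatter link yields more conservative probability estimates.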

Similar resources

Regularizers versus Losses for Nonlinear Dimensionality Reduction: A Factored View with New Convex Relaxations

We demonstrate that almost all nonparametric dimensionality reduction methods can be expressed as a simple procedure: regularized loss minimization plus singular value truncation. By distinguishing the roles of the loss and the regularizer in this process, we recover a factored perspective that reveals some gaps in the current literature. Beyond identifying a useful new loss for manifold unfolding...

Timely loss recognition, the likelihood of halting corporate projects, and profitability

The relationship between timely recognition of loss-making projects and firm profitability is an important topic in accounting. In addition, investment and the financial appraisal of investment plans are among the most important duties of corporate managers. Prior literature has shown that timely loss recognition helps managers identify and abandon loss-making projects in time, and potentially influences the investment behavior of managers of business entities...

Wavelet Based Estimation of the Derivatives of a Density for a Discrete-Time Stochastic Process: Lp-Losses

We propose a method for estimating the derivatives of a probability density, based on wavelet methods, for a sequence of random variables with a common one-dimensional probability density function, and obtain an upper bound on the Lp-losses of such estimators. We suppose that the process is strongly mixing and we show that the rate of convergence essentially depends on the behavior of a special quad...

Prominent Adam's apple and laryngoscopic view: a study of 535 patients

Large-Margin Metric Learning for Partitioning Problems

In this paper, we consider unsupervised partitioning problems, such as clustering, image segmentation, video segmentation and other change-point detection problems. We focus on partitioning problems based explicitly or implicitly on the minimization of Euclidean distortions, which include mean-based change-point detection, K-means, spectral clustering and normalized cuts. Our main goal is to le...

Journal:
  • Journal of Machine Learning Research

Volume 16, Issue -

Pages -

Publication date: 2015